Consider the pair of nonlinear differential equations
As a first approach to a solution for this system, expand the two functions about some nonsingular point in power series:
Since the right-hand side of both equations contains a product, the denominators in the series will be useful for compactness. The system becomes
Equating coefficients of equal powers gives
which can be evaluated recursively as desired. For comparison with a previous presentation, constrain the functions with . Then the first few nonzero coefficients in terms of the remaining initial values for each function are
with the odd coefficients identically zero. The coefficients of these polynomials are not known sequences on OEIS, so that reference is of no help in identifying the underlying analytic functions. The coefficients without the constraint are even more complicated, making their identification that much more problematic.
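Since the displayed equations are not reproduced in this text, a small symbolic sketch may help make the recursive evaluation concrete. The right-hand sides below, as well as the constraint of vanishing first derivatives, are assumptions chosen purely for illustration and are not taken from the presentation itself:

```python
import sympy as sp

x, a0, b0 = sp.symbols('x a0 b0')

# Hypothetical right-hand sides standing in for the system's displayed
# equations (not reproduced in this text): f'' = g**2, g'' = f**2.
# The constraint is likewise assumed to be vanishing first derivatives.
N = 10                      # number of derivatives generated at the expansion point
F = [a0, 0]                 # f(0), f'(0)
G = [b0, 0]                 # g(0), g'(0)

for n in range(N - 2):
    # d^(n+2) f = d^n (g**2) and d^(n+2) g = d^n (f**2), expanded by Leibniz' rule
    F.append(sp.expand(sum(sp.binomial(n, k)*G[k]*G[n - k] for k in range(n + 1))))
    G.append(sp.expand(sum(sp.binomial(n, k)*F[k]*F[n - k] for k in range(n + 1))))

# Taylor coefficients F[k]/k! and G[k]/k!; under the assumed constraint
# the odd coefficients come out identically zero
print([sp.factor(F[k]/sp.factorial(k)) for k in range(N)])
print([sp.factor(G[k]/sp.factorial(k)) for k in range(N)])
```

With this assumed constraint the odd coefficients vanish, mirroring the behavior noted above, and equal initial values make the two columns of coefficients identical.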
When both functions have the same initial conditions, the two columns of coefficients are equal, both for the constrained and unconstrained cases. This indicates a common special solution, which can be determined with the integration
where the constant of integration is chosen purposefully. The third-order polynomial under the radical implies an inverse elliptic function upon integration, namely that of Weierstrass with the first parameter . One then has
where the two elliptic parameters have been omitted for conciseness. The second elliptic parameter can be determined by remembering that in this present case
so that one has
It is straightforward to verify by expansion that the coefficients of this function match those determined recursively.
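That verification can be sketched symbolically under the same assumed right-hand sides as above: for equal initial values the system reduces to a single equation whose series should satisfy the cubic first integral that leads to the elliptic form. The reduced equation f'' = f² and the constant below are illustrative assumptions rather than the presentation's own expressions:

```python
import sympy as sp

x, a0 = sp.symbols('x a0')

# Assumed reduction for equal initial conditions: f'' = f**2 with f'(0) = 0.
# This stands in for the displayed equations, which are not reproduced here.
N = 12
F = [a0, 0]
for n in range(N - 2):
    F.append(sp.expand(sum(sp.binomial(n, k)*F[k]*F[n - k] for k in range(n + 1))))

f = sum(F[k]/sp.factorial(k)*x**k for k in range(N))

# Multiplying f'' = f**2 by f' and integrating gives f'**2 = (2/3) f**3 + c,
# with the constant fixed by the initial values
c = -sp.Rational(2, 3)*a0**3
residual = sp.expand(sp.diff(f, x)**2 - sp.Rational(2, 3)*f**3 - c)
print([residual.coeff(x, k) for k in range(N - 2)])   # all zero through the retained order
```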
This second-order system has an exact constant, but it is more complicated than that of the corresponding first-order system. Add a pair of equivalences and integrate both sides:
Since this constant is nonlinear in the two functions, it cannot be used immediately for direct integration as in the first-order system: it does not produce the separable equations needed for that approach.
One can easily form separate fourth-order equations for each function. The first one has a curious structure:
The common special solution satisfies this equation, but it is not clear how to find a more general solution. The second fourth-order equation is not as pretty:
The common special solution satisfies this equation as well, but identifying a more general solution is still problematic.
One can use the exact constant to form separate third-order equations for each function, starting with their first derivatives:
Since the common special solution arises from an inverse integral, one would like to cast these equations in a similar form. The problem is that there are derivatives with respect to the independent variable under the radicals. One way to remove them in the left-hand radicals is by defining the intermediate functions
in terms of which the equations take the simpler forms
which can then be rearranged to expressions appropriate for inverse integrals:
The presence of a second derivative with respect to the independent variable in the right-hand equation is a temporary complication, as will be seen.
Both of the intermediate functions are defined in terms of a second derivative: if one uses the function under the derivative as a new variable, then a simple quadrature is possible. That is, let
so that the intermediate functions are given by
which are easily rearranged to
Multiplying both sides of each equation by the appropriate derivative and integrating gives
The changes of variable were made simply to facilitate these integrations. They can be reversed with the differentials
so that the first derivatives with respect to the independent variable are
or in expressions appropriate for inverse integrals
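The chain of manipulations just described (new variables, multiplication by the appropriate derivative, integration, and rearrangement to an inverse integral) can be sketched generically. The right-hand side used below is only a placeholder, since the actual expressions under the radicals are not reproduced in this text:

```python
import sympy as sp

x, C = sp.symbols('x C')
w = sp.symbols('w', positive=True)
u = sp.Function('u')

# Placeholder equation u'' = h(u) with h(u) = w**2 in the new variable w;
# the document's actual expressions are not reproduced in this text.
h = w**2
H = sp.integrate(h, w)                       # antiderivative of h

# multiplying u'' = h(u) by u' and integrating gives u'**2 / 2 = H(u) + C
first_integral = sp.Eq(sp.Derivative(u(x), x)**2/2, H.subs(w, u(x)) + C)

# differentiate the first integral and use u'' = h(u): the two sides agree
lhs_dx = sp.diff(first_integral.lhs, x).subs(sp.Derivative(u(x), x, 2), h.subs(w, u(x)))
rhs_dx = sp.diff(first_integral.rhs, x)
print(sp.simplify(lhs_dx - rhs_dx))          # -> 0

# rearranged for an inverse integral: x - x0 = Integral of dw / sqrt(2*H(w) + 2*C)
print(1/sp.sqrt(2*H + 2*C))
```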
One is now in possession of two such expressions for each function. For the first function, the two left-hand sides do not contain the independent variable, so one can easily form the equivalence
The corresponding statement for the second function requires removal of the second derivative with respect to the independent variable, but that can be done using the definition of the second intermediate function. Recognizing that
one can then write
These two equations are strange creatures, being mixtures of integrals and nonlinear derivatives. What is clear is that each determines one of the intermediate functions in terms of a function in the original system, without reference to the independent variable. This allows one to understand the nature of the functions in the original system compared to the common special solution.
To get an idea of the basic forms of their solutions, let
in the first equation. Keeping the two lowest-order terms on the left-hand side, the equation becomes
For the first term to be equal to a constant, one must have . Setting coefficients equal on the two sides of this equation gives two relations for three variables, so let the second coefficient be temporarily independent:
For the second intermediate function, let
in the second equation. Keeping the two lowest-order terms on both sides of the equation, one has
For the first terms on each side to be equal, one must again have . Setting coefficients equal on the two sides again gives two relations for three variables, so let the second coefficient be temporarily independent:
It is clear from these initial expansions that the common choice leads to full expansions in powers of three of the respective independent variable:
Having recognized that the intermediate functions are both functions of cubed variables, one is tempted to try to write their determining equations, containing integrals and nonlinear derivatives, in terms of such cubic variables. Unfortunately this does not lead to any particularly useful simplifications: just the nature of the beast.
With the full expansions in powers of three, the equation for the first intermediate function then becomes
while that for the second intermediate function becomes
In moving from the first to the second step, the two sums in the second bracket on the right-hand side combine into a sum with no constant term.
Both of these expressions have the square of an infinite series on the left-hand side. One can pick out a particular power of such a square using
For an explicit recursive definition of the quantities , first set in the sum on the right-hand side. Then equating exponents on both sides requires on the left-hand side. Restricting the final summation index by the term for which , one has
for . Separating terms on the left-hand side containing then gives
For an explicit recursive definition of the quantities , first set in the first sum on the right-hand side. Then equating exponents on both sides again requires on the left-hand side. Restricting the final summation index by the term for which , one has
Separating terms on the left-hand side containing then gives
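Both recursions share the same generic shape: the coefficients of a series are recovered from the coefficients of its square by peeling off the terms containing the highest-index coefficient. A minimal sketch of that shape follows, with symbolic placeholder targets standing in for the elided right-hand sides of the two determining equations:

```python
import sympy as sp

# target coefficients of the squared series, used here only as placeholders
# for the elided right-hand sides of the determining equations
n_max = 6
c = sp.symbols(f'c0:{n_max}')

# if (sum a_k t**k)**2 = sum c_n t**n, the Cauchy product gives
#   c_n = sum_{k=0}^{n} a_k a_{n-k},
# and separating the terms containing a_n yields the recursion below
a = [sp.sqrt(c[0])]                                  # a_0**2 = c_0
for n in range(1, n_max):
    known = sum(a[k]*a[n - k] for k in range(1, n))  # terms without a_0 or a_n
    a.append(sp.simplify((c[n] - known) / (2*a[0])))

for n, an in enumerate(a):
    print(n, an)
```

The growing nesting of earlier coefficients in each new one is already visible at this small order, which is the sense in which the explicit recursions are "certainly not simple."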
These two explicit recursive definitions are certainly not simple and are not likely susceptible to closed-form solutions. They do, however, have definite usefulness. Evaluating the first few values of each reveals a critical pattern: every term in has at least one power of , every term in has at least one power of or , every term in has at least one power of , or , and so on, with the same behavior holding mutatis mutandis for the other set of quantities. That means that all coefficients with index three or higher inherit the behavior of the second coefficients, which have already been evaluated above.
The end result is that the two intermediate functions can be written compactly as
where a prime on a coefficient indicates that the factor in front of the sum has been excluded.
It is now clear that the solution to the differential system consists of functions more complicated than the Weierstrass elliptic function. Each denominator of the expressions appropriate for inverse integrals includes a possibly infinite series of powers of three in these functions. The resulting integrals go beyond the inverse Weierstrass elliptic function in terms of detail, but still have the same general structure: the solution functions are inverses of these integrals, just as for the common special solution.
That is, with the definitions
the solution to the original system of equations is schematically
The defined inverse functions become more dramatic with the introduction of the compact forms for the intermediate functions. Applying the simple integrals
the expressions under the square roots become
The inverse functions themselves are then
which are patently more complicated than the inverse Weierstrass elliptic function, as previously noted.
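The structural claim, that the solution functions arise as inverses of such integrals, can also be illustrated numerically. In the sketch below the radicand is taken to be just the bare cubic of the common special solution, since the additional series under the radicals are not reproduced in this text, and the constants are arbitrary illustrative values:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Assumed radicand for illustration only: the bare cubic (2/3)*f**3 + c of the
# common special solution; the full radicands contain further series terms
# that are not reproduced in this text.
c = 1.0
f0 = 1.0                                  # value of the function at x = 0

def radicand(f):
    return (2.0/3.0)*f**3 + c

def x_of_f(f):
    # the inverse integral x(f) = integral from f0 to f of df / sqrt(radicand)
    return quad(lambda s: 1.0/np.sqrt(radicand(s)), f0, f)[0]

def f_of_x(x, f_hi=50.0):
    # invert x(f) by root finding on a bracket where the integrand is positive
    return brentq(lambda f: x_of_f(f) - x, f0, f_hi)

xs = np.linspace(0.0, 1.0, 5)
print([f_of_x(x) for x in xs])
```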
Two special cases jump out immediately: setting in the first inverse function gives
With the identification this is just the common special solution. It is easy to check that the constants as defined satisfy this identification for equal initial conditions.
Setting in the second inverse function gives
which is again the common special solution with the identification .
As a final step, one needs to determine the values of and in terms of the initial conditions of the differential system. Given that
one can set for the pair of equations
where .
Since all of the coefficients are given recursively in terms of , and all of the in terms of , this is in principle the desired determination, albeit in infinite equations. It would also be desirable to see explicitly how the special values of and arise for equal initial conditions.
The bottom line of this presentation is that the solution to the differential system requires functions not among currently known special functions. Given that the system appears so simple, that in itself is interesting information.
Uploaded 2024.12.01 analyticphysics.com